Literature-Augmented Clinical Outcome Prediction
We present BEEP (Biomedical Evidence-Enhanced Predictions), a novel approach
for clinical outcome prediction that retrieves patient-specific medical
literature and incorporates it into predictive models. Based on each individual
patient's clinical notes, we train language models (LMs) to find relevant
papers and fuse them with information from notes to predict outcomes such as
in-hospital mortality. We develop methods to retrieve literature based on
noisy, information-dense patient notes, and to augment existing outcome
prediction models with retrieved papers in a manner that maximizes predictive
accuracy. Our approach boosts predictive performance on three important
clinical tasks in comparison to strong recent LM baselines, increasing F1 by up
to 5 points and precision@Top-K by a large margin of over 25%.
Comment: To appear in Findings of NAACL 2022. Code available at:
https://github.com/allenai/BEE
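The retrieve-then-fuse idea described above can be sketched in a few lines. This is a toy illustration, not the BEEP implementation: it uses a simple bag-of-words cosine retriever over paper abstracts and a hypothetical score-averaging fusion, where `predict` stands in for any note-level outcome model.

```python
# Minimal sketch of literature-augmented outcome prediction (illustrative only).
from collections import Counter
import math

def bow(text):
    """Bag-of-words term counts for a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(note, papers, k=2):
    """Rank paper abstracts by similarity to the patient note; return top k."""
    q = bow(note)
    return sorted(papers, key=lambda p: cosine(q, bow(p)), reverse=True)[:k]

def fuse_and_predict(note, papers, predict, k=2):
    """Hypothetical fusion: average the model's score on the note alone
    and on the note concatenated with each retrieved abstract."""
    scores = [predict(note)] + [predict(note + " " + p) for p in retrieve(note, papers, k)]
    return sum(scores) / len(scores)
```

In the paper, both retrieval and fusion are learned with language models; the sketch only shows where literature enters the prediction pipeline.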
Open Domain Multi-document Summarization: A Comprehensive Study of Model Brittleness under Retrieval
Multi-document summarization (MDS) assumes a set of topic-related documents
are provided as input. In practice, this document set is not always available;
it would need to be retrieved given an information need, i.e. a question or
topic statement, a setting we dub "open-domain" MDS. We study this more
challenging setting by formalizing the task and bootstrapping it using existing
datasets, retrievers and summarizers. Via extensive automatic and human
evaluation, we determine: (1) state-of-the-art summarizers suffer large
reductions in performance when applied to open-domain MDS, (2) additional
training in the open-domain setting can reduce this sensitivity to imperfect
retrieval, and (3) summarizers are insensitive to the retrieval of duplicate
documents and the order of retrieved documents, but highly sensitive to other
errors, like the retrieval of irrelevant documents. Based on our results, we
provide practical guidelines to enable future work on open-domain MDS, e.g. how
to choose the number of retrieved documents to summarize. Our results suggest
that new retrieval and summarization methods and annotated resources for
training and evaluation are necessary for further progress in the open-domain
setting.
Comment: Accepted to EMNLP Findings 202
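The open-domain MDS setup studied here can be sketched as a retrieve-then-summarize pipeline. The retriever and summarizer below are illustrative stand-ins (the study plugs in existing trained models); the exact-duplicate filter reflects the finding that duplicates are relatively harmless, while irrelevant retrieved documents are the damaging error type.

```python
# Minimal sketch of an open-domain multi-document summarization pipeline.
def keyword_retrieve(query, corpus, k):
    """Toy retriever (an assumption): rank documents by query-term overlap."""
    terms = set(query.lower().split())
    return sorted(corpus, key=lambda d: len(terms & set(d.lower().split())), reverse=True)[:k]

def open_domain_mds(query, corpus, retrieve, summarize, k=5):
    """Retrieve a document set for the information need, then summarize it."""
    docs = retrieve(query, corpus, k)
    unique = list(dict.fromkeys(docs))  # drop exact duplicates, preserving order
    return summarize(unique)
```

The key design point is that the summarizer never sees a gold document set: its input quality is bounded by the retriever, which is exactly where the reported performance drops originate.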